Nathan Labenz from the Cognitive Revolution podcast |637|
Update: 2024-08-30
Description
AI Ethics may be unsustainable
-=-=-=
In the clamor surrounding AI ethics and safety, are we missing a crucial piece of the puzzle: the role of AI in uncovering and disseminating truth? That’s the question I posed to Nathan Labenz of the Cognitive Revolution podcast.
Key points:
The AI Truth Revolution
Alex Tsakiris argues that AI has the potential to become a powerful tool for uncovering truth, especially in controversial areas:
“To me, that’s what AI is about… there’s an opportunity for an arbiter of truth, ultimately an arbiter of truth, when it has the authority to say, ‘No, their denial of this does not hold up to careful scrutiny.’”
This perspective suggests that AI could challenge established narratives in ways that humans, with our biases and vested interests, often fail to do.
The Tension in AI Development
Nathan Labenz highlights the complex trade-offs involved in developing AI systems:
“I think there’s just a lot of tensions in the development of these AI systems… Over and over again, we find these trade offs where we can push one good thing farther, but it comes with the cost of another good thing.”
This tension is particularly evident when it comes to truth-seeking versus other priorities like safety or user engagement.
The Transparency Problem
Both discussants express concern about the lack of transparency in major AI systems. Alex points out:
“Google Shadow Banning, which has been going on for 10 years, indeed, demonetization, you can wake up tomorrow and have one of your videos…demonetized and you have no recourse.”
This lack of transparency raises serious questions about the role of AI in shaping public discourse and access to information.
The Consciousness Conundrum
The conversation takes a philosophical turn when discussing AI consciousness and its implications for ethics. Alex posits:
“If consciousness is outside of time space, I think that kind of tees up…maybe we are really talking about something completely different.”
This perspective challenges conventional notions of AI capabilities and the ethical frameworks we use to approach AI development.
The Stakes Are High
Nathan encapsulates the potential risks associated with advanced AI systems:
“I don’t find any law of nature out there that says that we can’t, like, blow ourselves up with AI. I don’t think it’s definitely gonna happen, but I do think it could happen.”
While this quote acknowledges the safety concerns that dominate AI ethics discussions, the broader conversation suggests that the more immediate disruption might come from AI’s potential to challenge our understanding of truth and transparency.
Youtube: https://youtu.be/AKt2nn8HPbA
Rumble: https://rumble.com/v5bzr0x-ai-truth-ethics-636.html